On the last day of my last music theory course, the teacher said something that rewired my brain:

"Actually, you can do anything, anytime."

That was it. Years of scales, modes, harmonic functions, dominant resolutions, II-V-I progressions — and the conclusion was: there are no rules.

My first thought: couldn't you have mentioned this on day one?

But of course not. "There are no rules" on day one is meaningless — an empty permission slip. "There are no rules" after years of learning why the rules exist, what they protect against, and when breaking them creates magic? That's freedom. Real freedom, earned through understanding, not gifted through ignorance.

I think about this constantly when working with AI coding assistants.

The Danger of Shortcuts Without Context

Silicon Valley loves a motto. "Move fast and break things." "Don't let perfect be the enemy of good." "Ship fast, iterate."

These aren't wrong, but they're dangerous without context — like handing someone "there are no rules" without the journey that gives it meaning.

Take "move fast and break things," Facebook's internal motto during hypergrowth, which Zuckerberg retired in 2014 (a conveniently forgotten detail). The motto assumed robust observability, rapid deployment pipelines, feature flags, canary releases — a culture where "break" meant "detect and fix in minutes," not "break and pray nobody notices." It assumed you knew when things broke.

Strip away that context and you're just licensing chaos. A developer hearing "move fast" without understanding monitoring or rollback strategies isn't being empowered — they're being set up to fail spectacularly.

Now add AI to this mix. When Claude Code can generate a working function faster than you can type the method signature, "move fast" takes on new meaning. But the context requirements haven't changed — if anything, they've intensified. You still need to know when something breaks, except now the thing that might be broken was generated by a system that doesn't understand the business logic, the edge cases, or the reasons your API works the way it does.

The Bell Curve at AI Speed

There's a pattern every experienced developer recognizes — how code evolves across a career:

Junior code is simple. A bit imprecise, sometimes clunky, but straightforward. It does the thing without ceremony, not because the developer chose simplicity, but because simplicity is all they know.

Mid-level code is complex. This is where you discover design patterns, SOLID principles, hexagonal architecture, the strategy pattern, the factory that produces other factories. And you use all of it, because knowing these tools feels powerful, and every problem looks like an excuse to deploy your full arsenal. The code becomes an elaborate cathedral of abstractions.

Senior code is simple again. But it's informed simplicity — deliberate, earned. It does exactly what's needed with minimal ceremony. Not because the developer doesn't know the patterns, but because they know when not to use them.

It's a bell curve of complexity: naïve simplicity → sophisticated complexity → informed simplicity. The journey typically takes years.

AI compresses this timeline dramatically. You can now experience the complexity phase in weeks rather than years — Claude Code will happily generate sophisticated patterns, dependency injection frameworks, and elaborate abstractions based on a comment. But here's the thing that keeps me up at night: the tool can generate the complexity, but can you earn the wisdom that tells you when to step back from it?

I've watched developers use AI to jump straight to the sophisticated complexity phase without building the intuition that usually develops over time. They get code that works, that follows patterns, that looks professional. But when something breaks — and something always breaks — they're debugging abstractions they never really understood in the first place.

The bell curve hasn't disappeared. It's just been compressed into a different shape, and the risk is that you skip the struggle that creates understanding.

When the Brilliant Friend Is an Algorithm

There's a test I've always used for code quality: would a not-necessarily-brilliant, slightly skeptical friend be able to read this and understand what's happening? It's a good filter: it forces you to write code that's clear without being condescending, sophisticated without being obtuse.

AI has made this test more complicated.

ChatGPT can generate code that passes this test with flying colors. Clean variable names, well-structured functions, thoughtful comments. It sounds brilliant. But brilliance and understanding aren't the same thing. The code might be elegant, but does it understand your problem? Does it handle the edge case where the third-party API returns malformed JSON on Tuesdays? Does it know that the user_id field is sometimes null in legacy data?
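
To make that gap concrete, here's a minimal sketch (function and field names are hypothetical, echoing the edge cases above) of the defensive handling a polished-looking generated function often omits:

```python
import json
from typing import Optional


def parse_user_event(raw: str) -> Optional[dict]:
    """Parse an event payload while tolerating real-world quirks."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        # The third-party API occasionally returns malformed JSON;
        # skipping the event beats crashing the pipeline at 3 AM.
        return None
    if not isinstance(event, dict) or event.get("user_id") is None:
        # Legacy rows sometimes carry user_id = null; treat them
        # as unusable rather than letting None leak downstream.
        return None
    return event
```

None of this is clever, and that's the point: the value is in knowing these failure modes exist, which no amount of clean variable naming supplies.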

I've started thinking about a different test: the 3 AM debugging test. When something goes wrong at 3 AM and you're staring at this code under pressure, can you trace through it? Do you understand not just what it does, but why it does it that way? AI-generated code often optimizes for looking right rather than being debuggable under stress.

Excellence in the AI era increasingly means knowing when to trust the assistant and when to rebuild from first principles. When to let the AI explore the solution space and when to roll up your sleeves and work through the problem yourself.

The Schools Still Apply (But the Arguments Have Shifted)

Even the experts disagree about what "good" looks like, and AI hasn't resolved these debates — it's reframed them.

The Agile-era school (Fowler, Uncle Bob, Kent Beck) emphasizes clean code, test-driven development, refactoring. Their principles work beautifully with AI — when you have clear tests, AI can refactor fearlessly. When your functions are small and well-named, AI can understand them better too.

The pragmatist-hacker school (DHH, Carson Gross) focuses on developer happiness and shipping fast. AI supercharges this approach — you can build features faster, explore more variations, iterate more quickly. But you can also accumulate technical debt at AI speed if you're not careful.

The performance-craftsman school (Jonathan Blow, Casey Muratori) distrusts abstractions and values understanding what the machine actually does. They have a point that's become more urgent: when AI generates layers of abstraction automatically, how do you maintain that machine-level understanding?

These schools still disagree on important things. But they all share something crucial: the belief that understanding your system matters. AI doesn't eliminate that requirement — it makes the skill of developing understanding even more valuable.
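
The Agile-era point (clear tests let AI refactor fearlessly) is worth making concrete. A minimal sketch with a hypothetical `slugify` function: once behavior is pinned by tests like these, a rewrite, whether typed by a human or generated by an assistant, can be verified mechanically:

```python
def slugify(title: str) -> str:
    """Hypothetical function under refactor: lowercase, hyphen-joined."""
    return "-".join(title.lower().split())


# Characterization tests: any rewrite, by hand or by assistant,
# must leave these passing before it ships.
assert slugify("Hello World") == "hello-world"
assert slugify("  Extra   Spaces ") == "extra-spaces"
assert slugify("already-lower") == "already-lower"
```

The tests are the contract; the implementation becomes freely replaceable, which is exactly the property that makes AI-assisted refactoring safe rather than reckless.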

Trust as Your North Star

If excellence is a direction rather than a destination, how do you navigate? The answer is trust — how much others can rely on what you build.

Trust in code means certainty that assumptions hold. When you call a function, it behaves as documented. When you deploy a service, it handles edge cases gracefully. Some of this comes from the language itself — Rust's type system gives you more guarantees than JavaScript. But most of it comes from craftsmanship.
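
What "assumptions hold" can look like in practice, sketched with a hypothetical `transfer_cents` function: the signature and the guards state the contract outright, so a caller never has to guess what happens when it's violated:

```python
def transfer_cents(balance_cents: int, amount_cents: int) -> int:
    """Deduct a transfer amount from a balance.

    The contract is explicit: negative amounts and overdrafts are
    rejected loudly instead of silently producing a wrong balance.
    """
    if amount_cents < 0:
        raise ValueError("amount_cents must be non-negative")
    if amount_cents > balance_cents:
        raise ValueError("insufficient funds")
    return balance_cents - amount_cents
```

A type system can enforce some of this for you; where it can't, the craftsmanship is in writing the guard anyway.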

Here's what's interesting about trust in the AI era: you can build trustworthy systems using AI-generated components, but only if you understand the components well enough to verify they do what you think they do. The trust calibration problem is real — how do you know when to trust AI output versus when to write it yourself?

I've developed a heuristic: if I can't quickly explain to someone else why the AI's approach makes sense for this specific problem, I probably shouldn't ship it. The AI might be right, but I haven't done the work to understand why.

Those small moments of discipline — taking the time to understand rather than just accepting output that works — that's where excellence lives in the AI era. Not in refusing to use the tools, but in using them thoughtfully.

The Path, Accelerated

Excellence isn't knowing all the patterns. It isn't following any particular school of thought. It isn't writing perfect code — no such thing exists.

It's the attitude of caring. Paying attention when it would be easier not to. Learning the rules deeply enough to know when to break them. Understanding that the bell curve of complexity isn't just a career arc — it's a daily practice, a constant pull toward doing more with less.

AI has compressed the timeline, but it hasn't changed the fundamental path. You still need to cross through complexity to reach informed simplicity. You still need to earn the understanding that lets you break the rules responsibly. The difference is that now you can experience more of the journey in parallel rather than in sequence — exploring sophisticated approaches while building foundational understanding.

My music teacher was right. You can do anything, anytime. But the journey to earn that freedom — the scales, the theory, the deliberate practice — isn't an obstacle to bypass. It is the thing. AI can accelerate the journey, but it can't take it for you.

And if that sounds too philosophical for a conversation about software — well, maybe software could use more philosophy. The frameworks will change faster now. The patterns will evolve more quickly. But the ability to navigate complexity with grace, to know what matters and what doesn't, to build things others can trust? That's the part that's more valuable than ever.

The path is the same. The speed limit just got lifted.